PRISM: Lightweight Multivariate Time-Series Classification through Symmetric Multi-Resolution Convolutional Layers

Zucchi, Federico, Lampert, Thomas

arXiv.org Artificial Intelligence

Multivariate time series classification supports applications from wearable sensing to biomedical monitoring and demands models that can capture both short-term patterns and longer-range temporal dependencies. Despite recent advances, Transformer and CNN models often remain computationally heavy and rely on many parameters. This work presents PRISM (Per-channel Resolution Informed Symmetric Module), a lightweight fully convolutional classifier. PRISM operates channel-independently and, in its early stage, applies a set of multi-resolution symmetric convolutional filters. This symmetry enforces structural constraints inspired by linear-phase FIR filters from classical signal processing, effectively halving the number of learnable parameters within the initial layers while preserving the full receptive field. Across the diverse UEA multivariate time-series archive, as well as specific benchmarks in human activity recognition, sleep staging, and biomedical signals, PRISM matches or outperforms state-of-the-art CNN and Transformer models while using significantly fewer parameters and markedly lower computational cost. By bringing a principled signal processing prior into a modern neural architecture, PRISM offers an effective and computationally economical solution for multivariate time series classification.

1. Introduction

Multivariate time series, characterised by intricate temporal dependencies, are common in finance, healthcare, environmental science, and human activity recognition. Deep learning has improved analysis and classification for such data, yet state-of-the-art models often incur high computational cost, heavy parameterisation, and limited robustness in realistic data regimes. Transformer architectures, adapted from NLP for long-range dependencies, have been applied to time series. Despite promising results, their extensive parameter counts can lead to overfitting and high memory use [1].
In practice, self-attention can struggle with noisy, redundant signals [2, 3].
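The parameter-halving idea in the abstract can be illustrated with a small sketch. The paper's exact layer is not shown here; this is an illustrative construction of an even-symmetric kernel (the defining property of a Type-I linear-phase FIR filter), where only one half of the taps plus the centre tap are free parameters, yet the full receptive field is retained. The function name `symmetric_kernel` and the use of `np.convolve` for the per-channel convolution are assumptions for the example, not the authors' implementation:

```python
import numpy as np

def symmetric_kernel(half):
    """Mirror the learnable half around the centre tap to form an
    even-symmetric kernel of length 2*len(half) - 1.
    Only len(half) parameters are free; the rest are tied copies."""
    half = np.asarray(half, dtype=float)
    return np.concatenate([half, half[-2::-1]])

free = np.array([0.1, 0.5, 1.0])    # 3 learnable parameters
k = symmetric_kernel(free)          # 5-tap kernel: [0.1, 0.5, 1.0, 0.5, 0.1]

# Channel-independent 1-D convolution over a single channel.
x = np.random.default_rng(0).standard_normal(100)
y = np.convolve(x, k, mode="same")
```

A length-2k+1 receptive field thus costs only k+1 free weights, roughly halving the early-layer parameter count exactly as the abstract describes.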


Memory-DD: A Low-Complexity Dendrite-Inspired Neuron for Temporal Prediction Tasks

Yang, Dongjian, Li, Xiaoyuan, Xi, Chuanmei, Sun, Ye, Liu, Gang

arXiv.org Artificial Intelligence

Abstract--Dendrite-inspired neurons have been widely used in tasks such as image classification due to their low computational complexity and fast inference speed. Temporal data prediction, a key machine learning task, is central to real-time scenarios such as sensor data analysis, financial forecasting, and urban traffic management. However, existing dendrite-inspired neurons are mainly designed for static data: studies on capturing dynamic features and modeling long-term dependencies in temporal sequences remain limited, and efficient architectures specifically designed for temporal sequence prediction are still lacking. In this paper, we propose Memory-DD, a low-complexity dendrite-inspired neuron model. Memory-DD consists of two dendrite-inspired neuron groups that contain no nonlinear activation functions but can still realize nonlinear mappings. Compared with traditional neurons without dendritic functions, Memory-DD requires only two neuron groups to extract logical relationships between features in input sequences. This design effectively captures temporal dependencies and is suitable for both classification and regression tasks on sequence data. Experimental results show that Memory-DD achieves an average accuracy of 89.41% on 18 temporal classification benchmark datasets, outperforming LSTM by 4.25%. On 9 temporal regression datasets, it reaches comparable performance to LSTM while using only 50% of the parameters and reducing computational complexity (FLOPs) by 27.7%. These results demonstrate that Memory-DD successfully extends the low-complexity advantages of dendrite-inspired neurons to temporal prediction, providing a low-complexity and efficient solution for time-series data processing. With the rapid development of information technology, massive temporal sequence data have become the foundation of modern society, ranging from industrial IoT to financial markets.
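The abstract's claim that a neuron group with no activation function can still realize a nonlinear mapping may seem surprising. A toy sketch makes it plausible, assuming the multiplicative synaptic interaction that is conventional in the dendritic-neuron-model literature; this is an assumption about the mechanism, not the authors' exact Memory-DD formulation, and `dendritic_group` is a hypothetical name:

```python
import numpy as np

rng = np.random.default_rng(0)

def dendritic_group(x, W, b):
    """One dendrite-inspired group: each dendritic branch computes an
    affine function of the input, and the branches interact
    multiplicatively. The product term supplies the nonlinearity --
    no sigmoid/ReLU-style activation is applied anywhere."""
    branches = W @ x + b     # (n_branches,) affine pre-activations
    return np.prod(branches)  # multiplicative dendritic interaction

x = rng.standard_normal(4)
W = rng.standard_normal((3, 4))
b = rng.standard_normal(3)
y = dendritic_group(x, W, b)  # scalar output, nonlinear in x
```

With three branches and zero bias the map is homogeneous of degree 3 rather than 1, i.e. scaling the input by 2 scales the output by 8, which is exactly the kind of input-feature interaction a single activation-free linear neuron cannot express.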